Improve Assist LLM entity targeting #169636
Conversation
Please take a look at the requested changes, and use the Ready for review button when you are done, thanks 👍
Pull request overview
This PR extends the Assist LLM API so conversation agents can target specific exposed Home Assistant entities by exact entity_id, instead of relying only on fuzzy intent matching. It updates the Assist prompt/context generation in homeassistant/helpers/llm.py and adds/adjusts tests in tests/helpers/test_llm.py to cover the new tool exposure and prompt format.
Changes:
- Adds exact-entity control tools for `turn_on`/`turn_off` and exposes `entity_id` values in the static Assist context.
- Rejects missing, nonexistent, or unexposed entities before dispatching exact-entity service calls.
- Updates LLM helper tests to expect the new tools and the enriched static prompt.
Reviewed changes
Copilot reviewed 2 out of 2 changed files in this pull request and generated 3 comments.
| File | Description |
|---|---|
| `homeassistant/helpers/llm.py` | Adds exact `entity_id` control tools, updates Assist prompt guidance, and includes `entity_id` in the static exposed-entity context. |
| `tests/helpers/test_llm.py` | Updates Assist API prompt/tool assertions and adds a new exact-entity control test. |
```python
entity_ids = [
    cv.entity_id(entity_id)
    for entity_id in cv.ensure_list(tool_input.tool_args[ATTR_ENTITY_ID])
```

```python
tools.extend(
    [
        EntityControlTool(SERVICE_TURN_ON),
        EntityControlTool(SERVICE_TURN_OFF),
    ]
```
```python
async def test_entity_control_tool_uses_only_exposed_entities(
```
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: 303c97dff0
ℹ️ About Codex in GitHub
Your team has set up Codex to review pull requests in this repo. Reviews are triggered when you
- Open a pull request for review
- Mark a draft as ready
- Comment "@codex review".
If Codex has suggestions, it will comment; otherwise it will react with 👍.
Codex can also answer questions or update the PR. Try commenting "@codex address that feedback".
```python
failed[entity_id] = "Entity is not exposed to this assistant"
continue

valid_entity_ids.append(entity_id)
```
Exclude unsupported domains from entity control results

This loop treats every exposed entity with a current state as actionable, but `homeassistant.turn_on`/`turn_off` only executes per-domain services and silently skips unsupported domains (it just logs a warning in `homeassistant/components/homeassistant/__init__.py`, lines 148-173). As a result, if the LLM passes an exposed non-turnable entity (for example a sensor), this tool can return `success: true` and include it in `done` even though no action was performed, causing incorrect confirmations to users.
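One way to address this is to pre-filter entity IDs by domain before dispatching the service call. The sketch below is standalone and hypothetical: `TURNABLE_DOMAINS` is an illustrative allowlist, not Home Assistant's real domain registry, and `split_by_domain_support` is not an existing helper.

```python
# Sketch: split entity_ids into actionable vs. failed based on domain,
# so non-turnable entities (e.g. sensors) are reported as failures
# instead of silently counted as successes.
# TURNABLE_DOMAINS is an illustrative subset, not Home Assistant's real list.
TURNABLE_DOMAINS = {"light", "switch", "fan", "media_player", "climate", "cover"}


def split_by_domain_support(entity_ids: list[str]) -> tuple[list[str], dict[str, str]]:
    """Return (actionable, failed) based on each entity's domain prefix."""
    actionable: list[str] = []
    failed: dict[str, str] = {}
    for entity_id in entity_ids:
        domain = entity_id.split(".", 1)[0]
        if domain in TURNABLE_DOMAINS:
            actionable.append(entity_id)
        else:
            failed[entity_id] = "Domain does not support this service"
    return actionable, failed
```

The tool could then merge the `failed` dict from this check into its existing `failed` results before calling the service.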
Force-pushed: 303c97d to dae9d9f
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: dae9d9fd34
```python
try:
    entity_ids = [
        cv.entity_id(entity_id)
        for entity_id in cv.ensure_list(tool_input.tool_args[ATTR_ENTITY_ID])
```
Guard missing entity_id before reading tool arguments

`EntityControlTool.async_call` indexes `tool_input.tool_args["entity_id"]` directly, so malformed tool calls that omit `entity_id` raise `KeyError` instead of returning a structured tool error. In the conversation pipeline, `chat_log` only converts `HomeAssistantError` and `vol.Invalid` into tool results, so this uncaught exception can abort the tool-call flow instead of letting the assistant recover. This is especially likely with LLM-generated calls where required args are occasionally missing.
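A standalone sketch of the suggested guard, under the assumption that missing arguments should surface as a catchable, structured error. `ToolArgsError` and `extract_entity_ids` are hypothetical names standing in for an error type the pipeline already handles (such as `HomeAssistantError`):

```python
# Sketch: validate tool arguments before indexing them, so a malformed
# LLM tool call yields a structured error rather than an uncaught KeyError.
class ToolArgsError(Exception):
    """Raised when required tool arguments are missing or malformed."""


def extract_entity_ids(tool_args: dict) -> list[str]:
    """Return the entity_id argument as a list, or raise ToolArgsError."""
    if "entity_id" not in tool_args:
        raise ToolArgsError("Required argument 'entity_id' is missing")
    value = tool_args["entity_id"]
    # Mirror cv.ensure_list: wrap a single value in a list.
    return value if isinstance(value, list) else [value]
```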
Force-pushed: dae9d9f to 37389c8
```python
entity_ids = [
    cv.entity_id(entity_id)
    for entity_id in cv.ensure_list(tool_input.tool_args[ATTR_ENTITY_ID])
```
```python
for entity_id in entity_ids:
    if hass.states.get(entity_id) is None:
        failed[entity_id] = "Entity does not exist"
        continue

    if llm_context.assistant and not async_should_expose(
        hass, llm_context.assistant, entity_id
    ):
        failed[entity_id] = "Entity is not exposed to this assistant"
        continue

    valid_entity_ids.append(entity_id)

if valid_entity_ids:
    await hass.services.async_call(
        HOMEASSISTANT_DOMAIN,
        self._service_name,
        {ATTR_ENTITY_ID: valid_entity_ids},
```
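The validate-then-dispatch pattern in this hunk can be sketched in isolation. This is a simplified stand-in, not the PR's code: `known_states`, `is_exposed`, and `call_service` are hypothetical stand-ins for `hass.states`, `async_should_expose`, and `hass.services.async_call`, and the service call is synchronous here for testability.

```python
# Sketch: collect per-entity failures, then dispatch one batched service
# call for the entities that passed validation.
def control_entities(
    entity_ids: list[str],
    known_states: set[str],
    is_exposed,
    call_service,
) -> dict:
    failed: dict[str, str] = {}
    valid_entity_ids: list[str] = []
    for entity_id in entity_ids:
        if entity_id not in known_states:
            failed[entity_id] = "Entity does not exist"
            continue
        if not is_exposed(entity_id):
            failed[entity_id] = "Entity is not exposed to this assistant"
            continue
        valid_entity_ids.append(entity_id)
    if valid_entity_ids:
        call_service(valid_entity_ids)
    return {"success": not failed, "done": valid_entity_ids, "failed": failed}
```

Batching the surviving IDs into a single call mirrors how `homeassistant.turn_on`/`turn_off` accept a list under `entity_id`, while the per-entity `failed` map lets the LLM report partial failures accurately.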
Force-pushed: 37389c8 to 54fa67b
Summary
Improves the Assist LLM API for specific entity control:
- Adds `HassEntityTurnOn` and `HassEntityTurnOff` tools for exact, exposed `entity_id` control

This gives LLM-backed conversation agents a canonical path for commands that refer to a specific entity when fuzzy intent matching is ambiguous.
Validation
- `python3 -m py_compile homeassistant/helpers/llm.py tests/helpers/test_llm.py`
- `/tmp/ha-core-test-venv/bin/ruff check homeassistant/helpers/llm.py tests/helpers/test_llm.py`

Targeted pytest could not be run locally because this checkout uses syntax that Python 3.13 rejects before test collection; CI should run the tests in the project-supported environment.